

UK signs first international treaty to implement AI safeguards

The Guardian

The UK government has signed the first international treaty on artificial intelligence in a move that aims to prevent misuses of the technology, such as spreading misinformation or using biased data to make decisions. Under the legally binding agreement, states must implement safeguards against any threats posed by AI to human rights, democracy and the rule of law. The treaty, called the framework convention on artificial intelligence, was drawn up by the Council of Europe, an international human rights organisation, and was signed on Thursday by the EU, UK, US and Israel. The justice secretary, Shabana Mahmood, said AI had the capacity to "radically improve" public services and "turbocharge" economic growth, but that it must be adopted without affecting basic human rights. "This convention is a major step to ensuring that these new technologies can be harnessed without eroding our oldest values, like human rights and the rule of law," she said.


Artificial intelligence is breaking patent law

#artificialintelligence

In 2020, a machine-learning algorithm helped researchers to develop a potent antibiotic that works against many pathogens (see Nature; https://doi.org/ggm2p4). Artificial intelligence (AI) is also being used to aid vaccine development, drug design, materials discovery, space technology and ship design. Within a few years, numerous inventions could involve AI. This is creating one of the biggest threats patent systems have faced. Patent law is based on the assumption that inventors are human; it currently struggles to deal with an inventor that is a machine.


Human decisions still needed in artificial intelligence for war

#artificialintelligence

US President Joe Biden should not heed the advice of the National Security Commission on Artificial Intelligence (NSCAI) to reject calls for a global ban on autonomous weapons. Instead, Biden should work on an innovative approach to prevent humanity from relinquishing its judgment to algorithms during war. The NSCAI maintains that a global treaty prohibiting the development, deployment and use of artificial intelligence (AI)-enabled weapons systems is not in the interests of the United States and would harm international security. It argues that Russia and China are unlikely to follow such a treaty. A global ban, it argues, would increase pressure on law-abiding nations and would enable others to use AI military systems in an unsafe and unethical manner.


Death by algorithm: the age of killer robots is closer than you think

#artificialintelligence

A conquering army wants to take a major city but doesn't want troops to get bogged down in door-to-door fighting as they fan out across the urban area. Instead, it sends in a flock of thousands of small drones, with simple instructions: Shoot everyone holding a weapon. A few hours later, the city is safe for the invaders to enter. This sounds like something out of a science fiction movie. But the technology to make it happen is mostly available today -- and militaries worldwide seem interested in developing it. Experts in machine learning and military technology say it would be technologically straightforward to build robots that make decisions about whom to target and kill without a "human in the loop" -- that is, with no person involved at any point between identifying a target and killing them.


Fully autonomous 'killer robots' could be here within a YEAR, claims expert

Daily Mail - Science & tech

Killer robots could be on battlefields within a year if the UN fails to arrange an international treaty limiting their development. That's the claim of Professor Noel Sharkey, who says early wartime machines could cause mass deaths and would not be able to tell the difference between enemies and civilians. His comments come as 120 United Nations member states meet this week at the Palais des Nations complex in Geneva to continue talks on the future challenges posed by lethal autonomous weapons systems. [Pictured: Dr Noel Sharkey (right) with Nobel Peace Laureate Jody Williams (left), campaigning for a ban on fully autonomous weapons.] Dr Noel Sharkey, a Professor of AI and Robotics and a Professor of Public Engagement at the University of Sheffield, told MailOnline that an international treaty banning the use of fully autonomous killer robots is 'vitally important'.


Is an AI Arms Race Inevitable? - Future of Life Institute

#artificialintelligence

AI Arms Race Principle: An arms race in lethal autonomous weapons should be avoided. Perhaps the scariest aspect of the Cold War was the nuclear arms race. At its peak, the US and Russia held over 70,000 nuclear weapons, a small fraction of which would have been enough to kill every person on Earth. As the race to create increasingly powerful artificial intelligence accelerates, and as governments test AI capabilities in weapons, many AI experts worry that an equally terrifying AI arms race may already be under way. In fact, at the end of 2015, the Pentagon requested $12-$15 billion for AI and autonomous weaponry in its 2017 budget, and the Deputy Defense Secretary at the time, Robert Work, admitted that he wanted "our competitors to wonder what's behind the black curtain."